12 research outputs found

    Decreasing the human coding burden in randomized trials with text-based outcomes via model-assisted impact analysis

    For randomized trials that use text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by trained human raters. This process, the current standard, is both time-consuming and limiting: even the largest human coding efforts are typically constrained to measure only a small set of dimensions across a subsample of available texts. In this work, we present an inferential framework that can be used to increase the power of an impact assessment, given a fixed human-coding budget, by taking advantage of any "untapped" observations (documents not manually scored due to time or resource constraints) as a supplementary resource. Our approach, a methodological combination of causal inference, survey sampling methods, and machine learning, has four steps: (1) select and code a sample of documents; (2) build a machine learning model to predict the human-coded outcomes from a set of automatically extracted text features; (3) generate machine-predicted scores for all documents and use these scores to estimate treatment impacts; and (4) adjust the final impact estimates using the residual differences between human-coded and machine-predicted outcomes. As an extension to this approach, we also develop a strategy for identifying an optimal subset of documents to code in Step 1 in order to further enhance precision. Through an extensive simulation study based on data from a recent field trial in education, we show that our proposed approach can be used to reduce the scope of a human-coding effort while maintaining nominal power to detect a significant treatment impact.
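The four steps described in this abstract can be sketched as a prediction-plus-residual-correction estimator. Everything below is a minimal illustration under invented assumptions (synthetic data, a ridge model as a stand-in for whatever learner the authors used), not the paper's actual implementation:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical synthetic trial: text features X, randomized treatment Z,
# and a human-coded outcome y (true treatment effect set to 0.5)
n = 1000
X = rng.normal(size=(n, 20))                   # automatically extracted text features
Z = rng.integers(0, 2, size=n)                 # randomized treatment indicator
y = X[:, 0] + 0.5 * Z + rng.normal(scale=0.5, size=n)

# Step 1: human-code only a random subsample (fixed coding budget of 200 docs)
coded = rng.choice(n, size=200, replace=False)
mask = np.zeros(n, dtype=bool)
mask[coded] = True

# Step 2: learn to predict the human-coded outcome from text features
model = Ridge().fit(X[mask], y[mask])

# Step 3: machine-predicted scores for ALL documents, used for a first impact estimate
y_hat = model.predict(X)
tau_pred = y_hat[Z == 1].mean() - y_hat[Z == 0].mean()

# Step 4: correct the estimate with residuals on the coded subsample
resid = y[mask] - y_hat[mask]
z_coded = Z[mask]
correction = resid[z_coded == 1].mean() - resid[z_coded == 0].mean()
tau_adj = tau_pred + correction
print(round(tau_adj, 3))
```

Because the residual correction is computed on the human-coded subsample, the final estimate remains anchored to the gold-standard codes even when the model's predictions are biased; the predictions serve only to reduce variance.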

    The Effects of Comparable‐Case Guidance on Awards for Pain and Suffering and Punitive Damages: Evidence from a Randomized Controlled Trial

    Damage awards for pain and suffering and punitive damages are notoriously unpredictable. Courts provide minimal, if any, guidance to jurors determining these awards, and apply similarly minimal standards in reviewing them. Lawmakers have enacted crude measures, such as damage caps, aimed at curbing award unpredictability, while ignoring less drastic alternatives that involve guiding jurors with information regarding damage awards in comparable cases ("comparable‐case guidance" or "prior‐award information"). The primary objections to the latter approach are based on the argument that, because prior‐award information uses information regarding awards in distinct cases, it introduces the possibility of biasing the award, or distorting the award size, even if prior‐award information reduces the variability of awards. This paper responds to these objections. It reports and interprets the results of a large randomized controlled trial designed to test juror behavior in response to prior‐award information and, specifically, to examine the effects of prior‐award information on both variability and bias under a range of conditions related to the foregoing objections. We conclude that there is strong evidence that prior‐award information improves the "accuracy" of awards: it significantly reduces the variability of awards, and any introduction of bias, or distortion of award size, is minor relative to its beneficial effect on variability. Furthermore, we conclude that there is evidence that jurors respond to prior‐award information as predicted in recent literature, and in line with the "optimal" use of such information; and that prior‐award information may cause jurors to approach award determinations more thoughtfully or analytically.

    Leveraging text data for causal inference using electronic health records

    Text is a ubiquitous component of medical data, containing valuable information about patient characteristics and care that is often missing from structured chart data. Despite this richness, it is rarely used in clinical research, owing partly to its complexity. Using a large database of patient records and treatment histories accompanied by extensive notes by attendant physicians and nurses, we show how text data can be used to support causal inference with electronic health data in all stages, from conception and design to analysis and interpretation, with minimal additional effort. We focus on studies using matching for causal inference. We augment a classic matching analysis by incorporating text in three ways: by using text to supplement a multiple imputation procedure, we improve the fidelity of imputed values to handle missing data; by incorporating text in the matching stage, we strengthen the plausibility of the matching procedure; and by conditioning on text, we can estimate easily interpretable text-based heterogeneous treatment effects that may be stronger than those found across categories of structured covariates. Using these techniques, we hope to expand the scope of secondary analysis of clinical data to domains where quantitative data is of poor quality or nonexistent, but where text is available, such as in developing countries.
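One common way to incorporate text in the matching stage, offered here only as an illustrative sketch and not as the authors' actual pipeline, is to embed each note as a TF-IDF vector and pair every treated unit with its nearest control note by cosine distance. All notes below are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Hypothetical clinical notes for treated and control patients
treated_notes = [
    "patient reports chest pain and shortness of breath",
    "diabetic patient, insulin adjusted overnight",
]
control_notes = [
    "chest pain resolved after rest, breathing normal",
    "hypertension follow-up, medication unchanged",
    "insulin dosage reviewed for diabetic patient",
]

# Represent every note as a TF-IDF vector in a shared vocabulary
vec = TfidfVectorizer()
V = vec.fit_transform(treated_notes + control_notes)
Vt, Vc = V[: len(treated_notes)], V[len(treated_notes):]

# Match each treated note to its nearest control note by cosine distance
nn = NearestNeighbors(n_neighbors=1, metric="cosine").fit(Vc)
dist, idx = nn.kneighbors(Vt)
for i, j in enumerate(idx.ravel()):
    print(f"treated {i} -> control {j} (cosine distance {dist[i, 0]:.2f})")
```

In a real analysis the text distance would typically supplement, not replace, matching on structured covariates, and match quality would be checked by human review of paired notes.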

    Replication Data for: Matching with Text Data: An Experimental Evaluation of Methods for Matching Documents and of Measuring Match Quality

    This repository contains the materials needed to replicate the results presented in Mozer et al. (2019), "Matching with Text Data: An Experimental Evaluation of Methods for Matching Documents and of Measuring Match Quality", forthcoming in Political Analysis.

    Lessons Learned From VHA's Rapid Implementation of Virtual Whole Health Peer-Led Groups During the COVID-19 Pandemic: Staff Perspectives

    Background: Committed to implementing a person-centered, holistic (Whole Health) system of care, the Veterans Health Administration (VHA) developed a peer-led, group-based, multi-session Taking Charge of My Life and Health (TCMLH) program wherein Veterans reflect on values, set health and well-being-related goals, and provide mutual support. Prior work has demonstrated the positive impact of these groups. After face-to-face TCMLH groups were disrupted by the COVID-19 pandemic, VHA facilities rapidly implemented virtual (video-based) TCMLH groups. Objective: We sought to understand staff perspectives on the feasibility, challenges, and advantages of conducting TCMLH groups virtually. Methods: We completed semi-structured telephone interviews with 35 staff members involved in the implementation of virtual TCMLH groups across 12 VHA facilities and conducted rapid qualitative analysis of the interview transcripts. Results: Holding TCMLH groups virtually was viewed as feasible. Factors that promoted the implementation included use of standardized technology platforms amenable to delivery of group-based curriculum, availability of technical support, and adjustments in facilitator delivery style. The key drawbacks of the virtual format included difficulty maintaining engagement and barriers to relationship-building among participants. The perceived advantages of the virtual format included the positive influence of being in the home environment on Veterans' reflection, motivation, and self-disclosure, the greater convenience and accessibility of the virtual format, and the virtual group's role as an antidote to isolation during the COVID-19 pandemic. Conclusion: Faced with the disruption caused by the COVID-19 pandemic, VHA pivoted by rapidly implementing virtual TCMLH groups. Staff members involved in implementation noted that delivering TCMLH virtually was feasible and highlighted both challenges and advantages of the virtual format. A virtual group-based program in which participants set and pursue personally meaningful goals related to health and well-being in a supportive environment of their peers is a promising innovation that can be replicated in other health systems.